245 research outputs found

    A Hierarchical Procedure for the Synthesis of ANFIS Networks

    Adaptive neurofuzzy inference systems (ANFIS) are an efficient technique for solving function approximation problems. When numerical samples are available, the synthesis of ANFIS networks can be carried out by exploiting clustering algorithms. Starting from a hyperplane clustering synthesis in the joint input-output space, this paper proposes a computationally efficient optimization of ANFIS networks. It is based on a hierarchical constructive procedure in which the number of rules is progressively increased and the optimal one is automatically determined, on the basis of learning theory, so as to maximize the generalization capability of the resulting ANFIS network. Extensive computer simulations prove the validity of the proposed algorithm and show a favorable comparison with other well-established techniques.
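    As a rough illustration of the hierarchical constructive selection described above, the sketch below grows the rule base one size at a time and keeps the size with the lowest validation error. It is only a minimal stand-in, not the paper's procedure: KMeans in the joint input-output space replaces the hyperplane clustering step, and the Gaussian blending of local linear models is an illustrative simplification.

        # Minimal sketch: select the number of rules by validation error.
        # KMeans in the joint (X, y) space stands in for hyperplane clustering;
        # each cluster gets a local linear (Takagi-Sugeno-like) consequent.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        def fit_rules(X, y, n_rules):
            """Cluster in the joint (X, y) space and fit one linear model per cluster."""
            Z = np.column_stack([X, y])
            km = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(Z)
            centers, models = [], []
            for r in range(n_rules):
                mask = km.labels_ == r
                centers.append(X[mask].mean(axis=0))
                models.append(LinearRegression().fit(X[mask], y[mask]))
            return np.array(centers), models

        def predict(X, centers, models, gamma=1.0):
            """Blend the local models with Gaussian memberships centred on the clusters."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            w = np.exp(-gamma * d2)
            w /= w.sum(axis=1, keepdims=True)
            local = np.column_stack([m.predict(X) for m in models])
            return (w * local).sum(axis=1)

        # Hierarchical selection: progressively increase the rule count and keep
        # the one that generalizes best on held-out data.
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(400, 2))
        y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(400)
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

        best = None
        for n_rules in range(2, 11):
            centers, models = fit_rules(X_tr, y_tr, n_rules)
            err = np.mean((predict(X_va, centers, models) - y_va) ** 2)
            if best is None or err < best[0]:
                best = (err, n_rules)
        print("selected number of rules:", best[1])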

    Prediction in Photovoltaic Power by Neural Networks

    The ability to forecast the power produced by renewable energy plants in the short and medium term is a key issue for enabling a high level of penetration of distributed generation into the grid infrastructure. Forecasting energy production is mandatory for dispatching and distribution, at the transmission system operator level as well as at the electrical distributor and power system operator levels. In this paper, we present three techniques based on neural and fuzzy neural networks, namely the radial basis function network, the adaptive neuro-fuzzy inference system and the higher-order neuro-fuzzy inference system, which are well suited to predicting data sequences stemming from real-world applications. The preliminary results concerning the prediction of the power generated by a large-scale photovoltaic plant in Italy confirm the reliability and accuracy of the proposed approaches.
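    As a hedged sketch of how such short-term prediction can be set up, the code below frames forecasting as sliding-window regression; KernelRidge with an RBF kernel stands in for a radial basis function network, and the synthetic signal, window length and hyperparameters are illustrative assumptions rather than the paper's setup.

        # Sliding-window one-step-ahead forecasting with an RBF-style regressor.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        def make_windows(series, lags, horizon=1):
            """Build (X, y) pairs: `lags` past samples -> value `horizon` steps ahead."""
            X, y = [], []
            for t in range(lags, len(series) - horizon + 1):
                X.append(series[t - lags:t])
                y.append(series[t + horizon - 1])
            return np.array(X), np.array(y)

        # Toy PV-like signal standing in for measured plant output.
        power = np.clip(np.sin(np.linspace(0, 40 * np.pi, 2000)), 0, None)
        X, y = make_windows(power, lags=24)
        split = int(0.8 * len(X))
        model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1).fit(X[:split], y[:split])
        rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
        print(f"one-step-ahead RMSE on the held-out tail: {rmse:.4f}")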

    Learning from distributed data sources using random vector functional-link networks

    One of the main characteristics of many real-world big data scenarios is their distributed nature. In a machine learning context, distributed data, together with the requirements of preserving privacy and scaling up to large networks, brings the challenge of designing fully decentralized training protocols. In this paper, we explore the problem of distributed learning when the features of every pattern are spread across multiple agents (as happens, for example, in a distributed database scenario). We propose an algorithm for a particular class of neural networks, known as Random Vector Functional-Link (RVFL) networks, based on the Alternating Direction Method of Multipliers optimization algorithm. The proposed algorithm makes it possible to learn an RVFL network from multiple distributed data sources, while restricting communication to the single operation of computing a distributed average. Our experimental simulations show that the algorithm achieves a generalization accuracy comparable to that of a fully centralized solution, while at the same time being extremely efficient.
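    The sketch below conveys the two ingredients named above, the RVFL expansion and a training step whose only communication is an average, but under a simplifying assumption: data here are split by samples across agents and the ridge sufficient statistics are averaged, whereas the paper tackles the harder case of features split across agents via ADMM.

        # RVFL with a shared random hidden layer; agents exchange only averages.
        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, n_hidden, d, lam = 4, 100, 5, 1e-2

        def rvfl_features(X, W, b):
            """Random Vector Functional-Link expansion: [X | sigmoid(X W + b)]."""
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return np.hstack([X, H])

        # Agents agree on a seed, so the random hidden layer is never transmitted.
        W = rng.standard_normal((d, n_hidden))
        b = rng.standard_normal(n_hidden)
        w_true = rng.standard_normal(d)  # shared ground-truth model for the toy data

        # Each agent holds a local shard of (X, y) and computes local statistics.
        stats = []
        for _ in range(n_agents):
            X = rng.standard_normal((200, d))
            y = X @ w_true + 0.1 * rng.standard_normal(200)
            Phi = rvfl_features(X, W, b)
            stats.append((Phi.T @ Phi, Phi.T @ y))

        # "Distributed average": only the averaged matrices travel over the network.
        A = sum(s[0] for s in stats) / n_agents
        c = sum(s[1] for s in stats) / n_agents
        beta = np.linalg.solve(A + lam * np.eye(A.shape[0]), c)  # global output weights
        print("output weights shape:", beta.shape)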

    Ottimizzazione Strutturale di Reti Neurofuzzy (Structural Optimization of Neuro-Fuzzy Networks)

    In this thesis, neuro-fuzzy models were chosen as the basis for developing new modeling systems that also exploit the most recent structural optimization techniques. In other words, the synthesis of the model parameters is complemented by the automatic determination of the model's structural complexity (i.e., the number of its parameters), so that its generalization capability is maximized. It should be noted that, for reasons of conciseness and clarity of exposition, this thesis does not present all of the work carried out during the three years of the doctorate. The focus is instead placed on a particular modeling system and on the synthesis procedure developed for it, so as to highlight the most original and significant contributions of the research work.

    Fuzzy Clustering Using the Convex Hull as Geometrical Model

    A new approach to fuzzy clustering is proposed in this paper. It aims to relax some constraints imposed by known algorithms by using a generalized geometrical model for clusters based on convex hull computation. A method is also proposed to determine suitable membership functions and hence to represent fuzzy clusters according to the adopted geometrical model. The convex hull is used not only at the end of the clustering analysis, for the geometric interpretation of the data, but also during the fuzzy data partitioning, within an online sequential procedure, to calculate the membership functions. Consequently, a pure fuzzy clustering algorithm is obtained, where clusters are fitted to the data distribution by means of the fuzzy membership of patterns to each cluster. The numerical results reported in the paper show the validity and efficacy of the proposed approach with respect to other well-known clustering algorithms.
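    To make the geometrical model concrete, the sketch below represents each cluster by its convex hull and derives a fuzzy membership from the (approximate) signed distance to the hull; the exponential membership, its beta parameter and the batch setting are illustrative assumptions, not the online sequential procedure of the paper.

        # Convex-hull clusters with distance-based fuzzy memberships.
        import numpy as np
        from scipy.spatial import ConvexHull

        def hull_signed_distance(points, hull):
            """Approximate signed distance via the hull's facet equations
            (rows of hull.equations are [unit normal | offset]; negative inside)."""
            return np.max(points @ hull.equations[:, :-1].T + hull.equations[:, -1], axis=1)

        def memberships(points, hulls, beta=2.0):
            """Soft assignment exp(-beta * max(distance, 0)), normalized over clusters."""
            d = np.column_stack(
                [np.maximum(hull_signed_distance(points, h), 0.0) for h in hulls])
            u = np.exp(-beta * d)
            return u / u.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(0)
        cluster_a = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
        cluster_b = rng.normal([4.0, 0.0], 0.5, size=(100, 2))
        hulls = [ConvexHull(cluster_a), ConvexHull(cluster_b)]
        U = memberships(rng.uniform(-1, 5, size=(10, 2)), hulls)
        print(U.round(2))  # one row per test point, one column per cluster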

    Multi-damage detection in composite space structures via deep learning

    The diagnostics of environmentally induced damage in composite structures plays a critical role in ensuring the operational safety of space platforms. Recently, spacecraft have been equipped with lightweight and very large substructures, such as antennas and solar panels, to meet the performance demands of modern payloads and scientific instruments. Due to their large surface, these components are more susceptible to impacts from orbital debris than other satellite locations. However, the detection of debris-induced damage still proves challenging in large structures, because of the minimal alterations it causes in the spacecraft's global dynamics, and calls for advanced structural health monitoring solutions. To address this issue, a data-driven methodology using Long Short-Term Memory (LSTM) networks is applied here to the case of damaged solar arrays. Finite element models of the solar panels are used to reproduce damage locations, which are selected based on the most critical risk areas of the structures. The modal parameters of the healthy and damaged arrays are extracted to build the governing equations of the flexible spacecraft. Standard attitude manoeuvres are simulated to generate two datasets, one including local accelerations and the other consisting of piezoelectric voltages, both measured at specific locations of the structure. The LSTM architecture is then trained by associating each sensed time series with the corresponding damage label. The performance of the deep learning approach is assessed, and a comparison is presented between the accuracy of the two distinct sets of sensors: accelerometers and piezoelectric patches. In both cases, the framework proved effective in promptly identifying the location of damaged elements from a limited number of measured time samples.
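    A minimal sketch of the classification stage described above is given below: an LSTM maps each sensed time series to a damage label. All shapes (sequence length, number of sensor channels, number of damage classes) and the random placeholder data are assumptions, not values from the study.

        # LSTM classifier from multichannel time series to damage labels.
        import numpy as np
        import tensorflow as tf

        n_steps, n_channels, n_classes = 200, 8, 5
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(n_steps, n_channels)),
            tf.keras.layers.LSTM(64),  # summarizes the whole measured sequence
            tf.keras.layers.Dense(n_classes, activation="softmax"),  # damage-location label
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Placeholder data standing in for the simulated accelerations / piezo voltages.
        X = np.random.randn(256, n_steps, n_channels).astype("float32")
        y = np.random.randint(0, n_classes, size=256)
        model.fit(X, y, epochs=2, batch_size=32, verbose=0)
        print(model.predict(X[:3], verbose=0).argmax(axis=1))  # predicted damage classes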

    On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks

    A change in the prevalent supervised learning techniques is foreseeable in the near future: from complex, computationally expensive algorithms to more flexible and elementary training schemes. The strong revitalization of randomized algorithms can be framed within this shift. We recently proposed a model for distributed classification based on randomized neural networks and hyperdimensional computing, which accounts for the cost of information exchange between agents by using compression. The use of compression is important as it addresses the communication bottleneck; however, the original approach is rigid in the way compression is used. Therefore, in this work, we propose a more flexible approach to compression and compare it to conventional compression algorithms, dimensionality reduction, and quantization techniques.
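    The toy example below illustrates only the compression idea in general terms: a real-valued parameter vector is bundled into a single bipolar hypervector using shared random keys and approximately decoded on the receiving side, so only the 1-bit-per-dimension hypervector needs to be exchanged. The dimensions, the key scheme and the norm rescaling are assumptions for illustration, not the scheme proposed in the paper.

        # Hyperdimensional bundling as lossy compression of a weight vector.
        import numpy as np

        rng = np.random.default_rng(0)
        dim_hv, n_params = 10000, 50
        keys = rng.choice([-1.0, 1.0], size=(n_params, dim_hv))  # shared random keys
        w = rng.standard_normal(n_params)                        # local readout weights

        bundle = np.sign(w @ keys)          # encode + 1-bit quantization (what gets sent)
        w_hat = (keys @ bundle) / dim_hv    # approximate decoding with the same keys
        w_hat *= np.linalg.norm(w) / np.linalg.norm(w_hat)  # rescale (norm assumed known)

        corr = np.corrcoef(w, w_hat)[0, 1]
        print(f"correlation between original and decoded weights: {corr:.3f}")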

    A General Approach to Dropout in Quantum Neural Networks

    In classical Machine Learning, "overfitting" is the phenomenon that occurs when a given model learns the training data excessively well and thus performs poorly on unseen data. A commonly employed technique for counteracting it is the so-called "dropout", which prevents computational units from becoming too specialized and hence reduces the risk of overfitting. With the advent of Quantum Neural Networks as learning models, overfitting might soon become an issue, owing to the increasing depth of quantum circuits as well as the multiple embeddings of classical features, which are employed to provide the computational nonlinearity. Here we present a generalized approach to applying the dropout technique in Quantum Neural Network models, defining and analysing different quantum dropout strategies to avoid overfitting and achieve a high level of generalization. Our study makes it possible to envision the power of quantum dropout in enabling generalization, providing useful guidelines for determining the maximal dropout probability for a given model based on overparametrization theory. It also highlights how quantum dropout does not impact the features of the Quantum Neural Network model, such as expressibility and entanglement. All these conclusions are supported by extensive numerical simulations and may pave the way to efficiently employing deep Quantum Machine Learning models based on state-of-the-art Quantum Neural Networks.
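    As a library-free illustration of the general idea, the sketch below represents a layered parameterized circuit as a list of gates and removes a random subset of the rotation gates with a given probability at each training step; the gate names, the ansatz and the set of droppable gates are illustrative assumptions, not the strategies analysed in the paper.

        # Randomly thinning a parameterized circuit: a quantum analogue of dropout.
        import random

        def build_circuit(n_qubits, n_layers):
            """A toy layered ansatz: single-qubit rotations plus nearest-neighbour CNOTs."""
            gates = []
            for layer in range(n_layers):
                for q in range(n_qubits):
                    gates.append(("RY", q, f"theta_{layer}_{q}"))
                for q in range(n_qubits - 1):
                    gates.append(("CNOT", q, q + 1))
            return gates

        def quantum_dropout(gates, p_drop, droppable=("RY",), rng=random):
            """Keep each gate independently with probability 1 - p_drop
            (only the gate types listed in `droppable` are ever removed)."""
            return [g for g in gates if g[0] not in droppable or rng.random() >= p_drop]

        full = build_circuit(n_qubits=4, n_layers=3)
        thinned = quantum_dropout(full, p_drop=0.2)  # resample at every training step
        print(len(full), "gates before dropout,", len(thinned), "after")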

    Deep Neural Networks for Multivariate Prediction of Photovoltaic Power Time Series

    The large-scale penetration of renewable energy sources is forcing the transition towards future electricity networks modeled on the smart grid paradigm, where energy clusters call for new methodologies for the dynamic energy management of distributed energy resources and foster the formation of partnerships to overcome integration barriers. Predicting the energy production of renewable energy sources, in particular photovoltaic plants, which are highly intermittent, is a fundamental tool in the modern management of electrical grids, which is shifting from reactive to proactive with the help of advanced monitoring systems, data analytics, and advanced demand-side management programs. The gradual move towards a smart grid environment impacts not only the operating control and management of the grid but also the electricity market. The focus of this article is on advanced methods for predicting photovoltaic energy output that prove, through their accuracy and robustness, to be useful tools for efficient system management, even at the prosumer's level, and for improving the resilience of smart grids. Four different deep neural models for the multivariate prediction of energy time series are proposed; all of them are based on the Long Short-Term Memory network, a type of recurrent neural network able to deal with long-term dependencies. Two of these models also use Convolutional Neural Networks to obtain higher levels of abstraction, since they can combine and filter different time series while taking all the available information into account. The proposed models are applied to real-world energy problems to assess their performance and are compared with the classic univariate approach used as a reference benchmark. The significance of this work is to show that, once trained, the proposed deep neural networks remain applicable in real online scenarios characterized by high data variability, without requiring retraining or ad hoc adjustments by the end user.
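    The sketch below shows one plausible Conv1D + LSTM arrangement in the spirit of the multivariate models described above; the window length, the number of input series and all layer sizes are assumptions for illustration, and the data are random placeholders, not the article's configuration or results.

        # Multivariate next-step forecasting with a convolutional front end and an LSTM.
        import numpy as np
        import tensorflow as tf

        n_steps, n_series = 48, 4  # e.g. power, irradiance, temperature, humidity
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(n_steps, n_series)),
            tf.keras.layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
            tf.keras.layers.LSTM(64),  # captures long-term dependencies
            tf.keras.layers.Dense(1),  # next-step photovoltaic power
        ])
        model.compile(optimizer="adam", loss="mse")

        # Random placeholders standing in for the real multivariate energy time series.
        X = np.random.randn(512, n_steps, n_series).astype("float32")
        y = np.random.randn(512, 1).astype("float32")
        model.fit(X, y, epochs=2, batch_size=64, verbose=0)
        print("forecast for the first window:", float(model.predict(X[:1], verbose=0)[0, 0]))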